
Prediction of the number of students taking make-up examinations using artificial neural networks

Original Article · International Journal of Machine Learning and Cybernetics

Abstract

In Turkish higher education, three examinations are primarily defined for any course: the midterm, the final and the make-up exam. Whether a student passes a course is decided using the midterm and final exam scores. A student who fails the course as a result of these exams can take a make-up exam, whose score replaces the final exam score. However, some students who are expected to take the make-up exam do not, for reasons such as their average score, distance to campus, or a low midterm score. Because make-up exam planning and scheduling are based on the number of students who failed the course, resources such as classrooms, invigilators, exam papers and toner are wasted. To reduce this waste, in this study we applied artificial neural networks (ANNs) trained by different approaches to predict the number of students taking make-up examinations. In the proposed framework, features of students and courses were determined, the data were collected, and ANNs were trained on the resulting datasets. Using the trained ANNs, each student who fails a course is classified as positive (will take the make-up exam) or negative (will not take the make-up exam). In the experiments, data from ten different courses were used to train ANNs with a random weight network (RWN), the error back-propagation (BP) algorithm, and metaheuristic algorithms such as the grey wolf optimizer (GWO), artificial bee colony, particle swarm optimization and ant colony optimization. The trained ANNs were compared in terms of training accuracy, testing accuracy and training time. BP achieved the best mean training accuracy on the unnormalized and normalized datasets, with 99.36% and 99.7%, respectively. GWO achieved the best mean testing accuracy on the unnormalized and normalized datasets, with 80.39% and 82.39%, respectively. Moreover, RWN had the best running time, training the ANN in under a second on both datasets. The experiments and comparisons show that an ANN-based classifier can be used to determine the number of students taking the make-up exam.
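To make the classification setup concrete, the following is a minimal sketch of the kind of ANN-based binary classifier the abstract describes: a single-hidden-layer feed-forward network trained with error back-propagation, labeling each failing student as taking (1) or not taking (0) the make-up exam. The feature set and synthetic data here are hypothetical illustrations, not the paper's actual dataset or implementation.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical features per failing student, scaled to [0, 1]:
# midterm score, final score, distance to campus.
n = 200
X = rng.random((n, 3))
# Assumed labeling rule for the synthetic data: students with a
# reasonable midterm score who live close by take the make-up exam.
y = ((X[:, 0] > 0.3) & (X[:, 2] < 0.7)).astype(float).reshape(-1, 1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# One hidden layer with 8 sigmoid neurons.
W1 = rng.normal(scale=0.5, size=(3, 8))
b1 = np.zeros(8)
W2 = rng.normal(scale=0.5, size=(8, 1))
b2 = np.zeros(1)

lr = 1.0
for epoch in range(5000):
    # Forward pass.
    H = sigmoid(X @ W1 + b1)
    out = sigmoid(H @ W2 + b2)
    # Back-propagate the cross-entropy error.
    d_out = out - y                       # (n, 1)
    d_H = (d_out @ W2.T) * H * (1 - H)    # (n, 8)
    W2 -= lr * H.T @ d_out / n
    b2 -= lr * d_out.mean(axis=0)
    W1 -= lr * X.T @ d_H / n
    b1 -= lr * d_H.mean(axis=0)

# Threshold the network output to get positive/negative predictions.
pred = (out > 0.5).astype(float)
accuracy = (pred == y).mean()
print(f"training accuracy: {accuracy:.2f}")
```

In the paper's framework the same network weights would instead be tuned by RWN, BP, or a metaheuristic such as GWO; only the training procedure changes, not the classifier structure.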


Figures 1–4 appear in the full article.



Acknowledgements

The authors would like to thank Konya Technical University for providing datasets.

Author information

Corresponding author: Mustafa Servet Kiran.



Cite this article

Kiran, M.S., Siramkaya, E., Esme, E. et al. Prediction of the number of students taking make-up examinations using artificial neural networks. Int. J. Mach. Learn. & Cyber. 13, 71–81 (2022). https://doi.org/10.1007/s13042-021-01348-y
